Collaborating Authors: uncertainty model


Reliable Statistical Guarantees for Conformal Predictors with Small Datasets

Sánchez-Domínguez, Miguel, Lacasa, Lucas, de Vicente, Javier, Rubio, Gonzalo, Valero, Eusebio

arXiv.org Machine Learning

Surrogate models (including deep neural networks and other supervised machine learning algorithms) are capable of approximating arbitrarily complex, high-dimensional input-output problems in science and engineering, but require a thorough, data-agnostic uncertainty quantification analysis before they can be deployed in any safety-critical application. The standard approach to data-agnostic uncertainty quantification is conformal prediction (CP), a well-established framework for building uncertainty models with proven statistical guarantees that assume no particular shape for the error distribution of the surrogate model. However, the classic statistical guarantee offered by CP bounds only the marginal coverage. For small calibration sets (frequent in realistic surrogate modelling that aims to quantify error in different regions), the potentially strong dispersion of the coverage distribution around its average undermines the relevance of this guarantee: coverages often fall below the expected value, making the framework less applicable. After a gentle presentation of uncertainty quantification for surrogate models aimed at machine learning practitioners, in this paper we bridge this gap by proposing a new statistical guarantee that offers probabilistic information about the coverage of a single conformal predictor. We show that the proposed framework converges to the standard CP solution for large calibration set sizes and, unlike the classic guarantee, still offers relevant information about the coverage of a conformal predictor for small data sizes. We validate the methodology on a suite of examples, and implement an open-access software solution that can be used alongside common conformal prediction libraries to obtain uncertainty models that fulfil the new guarantee.
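The small-calibration-set effect the abstract describes is easy to reproduce. The sketch below is a minimal, hypothetical split-conformal setup (a toy surrogate whose absolute residuals are half-normal; not the paper's method or its software): it builds the standard finite-sample conformal quantile and compares how dispersed the coverage of individual conformal predictors is for 20 versus 1000 calibration points, even though the average coverage meets the 90% marginal guarantee in both cases.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_conformal_quantile(cal_scores, alpha):
    """Standard finite-sample conformal quantile: the ceil((n+1)(1-alpha))-th
    order statistic of the calibration scores (clipped to the largest score)."""
    n = len(cal_scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return np.sort(cal_scores)[k - 1]

def coverage_once(n_cal, n_test, alpha):
    """Empirical test coverage of one conformal predictor built on a fresh
    calibration set; scores are toy |y - surrogate(x)| residuals."""
    q = split_conformal_quantile(np.abs(rng.normal(size=n_cal)), alpha)
    return np.mean(np.abs(rng.normal(size=n_test)) <= q)

alpha = 0.1
results = {}
for n_cal in (20, 1000):
    covs = np.array([coverage_once(n_cal, 2000, alpha) for _ in range(200)])
    results[n_cal] = (covs.mean(), covs.std())
    print(f"n_cal={n_cal}: mean coverage {covs.mean():.3f}, "
          f"spread (std) {covs.std():.3f}")
```

Both settings satisfy the marginal guarantee on average, but the standard deviation of the coverage across conformal predictors is several times larger for the small calibration set, which is exactly the dispersion the proposed guarantee is meant to quantify.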





From Predictions to Decisions: Using Lookahead Regularization

Neural Information Processing Systems

When deployed transparently, however, learned models also affect how users act in order to improve their outcomes. The standard approach to learning predictive models is agnostic to the user actions they induce and provides no guarantees as to the effect of those actions.


Semantic and Feature Guided Uncertainty Quantification of Visual Localization for Autonomous Vehicles

Wu, Qiyuan, Campbell, Mark

arXiv.org Artificial Intelligence

The uncertainty quantification of sensor measurements coupled with deep learning networks is crucial for many robotics systems, especially for safety-critical applications such as self-driving cars. This paper develops an uncertainty quantification approach in the context of visual localization for autonomous driving, where locations are selected based on images. Key to our approach is learning the measurement uncertainty with a light-weight sensor error model, which maps both image features and semantic information to a 2-dimensional error distribution. Our approach enables uncertainty estimation conditioned on the specific context of the matched image pair, implicitly capturing other critical, unannotated factors (e.g., city vs highway, dynamic vs static scenes, winter vs summer) in a latent manner. We demonstrate the accuracy of our uncertainty prediction framework using the Ithaca365 dataset, which includes variations in lighting and weather (sunny, night, snowy). We evaluate both the uncertainty quantification of the sensor+network and Bayesian localization filters that use a unique sensor gating method. Results show that the measurement error does not follow a Gaussian distribution under poor weather and lighting conditions, and is better predicted by our Gaussian Mixture model.
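The closing claim, that multi-modal errors are better captured by a Gaussian mixture than by a single Gaussian, can be illustrated with a generic sketch (plain EM on synthetic 2-D errors; not the paper's learned, image-conditioned model): a tight "fair-weather" error mode plus a broad, offset "bad-weather" mode is fit with one and with two components, and the mixture achieves a higher average log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_logpdf(x, mu, cov):
    """Log-density of a multivariate Gaussian, evaluated row-wise on x."""
    d = x - mu
    quad = np.einsum("ni,ij,nj->n", d, np.linalg.inv(cov), d)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (quad + logdet + x.shape[1] * np.log(2.0 * np.pi))

def fit_gmm(x, k, iters=100):
    """Plain EM for a k-component Gaussian mixture (illustrative, no checks).
    Components are initialised on points spread along the x0+x1 direction."""
    n, dim = x.shape
    order = np.argsort(x.sum(axis=1))
    mus = x[order[np.linspace(0, n - 1, k).astype(int)]].copy()
    covs = np.stack([np.cov(x.T) for _ in range(k)])
    pis = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: component responsibilities (log-space for stability)
        logp = np.stack([np.log(pis[j]) + gauss_logpdf(x, mus[j], covs[j])
                         for j in range(k)], axis=1)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted means, covariances and mixing weights
        nk = r.sum(axis=0)
        pis = nk / n
        mus = (r.T @ x) / nk[:, None]
        for j in range(k):
            d = x - mus[j]
            covs[j] = (r[:, j, None] * d).T @ d / nk[j] + 1e-6 * np.eye(dim)
    return pis, mus, covs

def mean_loglik(x, pis, mus, covs):
    """Average log-likelihood of x under a fitted mixture."""
    logp = np.stack([np.log(pis[j]) + gauss_logpdf(x, mus[j], covs[j])
                     for j in range(len(pis))], axis=1)
    m = logp.max(axis=1, keepdims=True)
    return float(np.mean(m[:, 0] + np.log(np.exp(logp - m).sum(axis=1))))

# synthetic 2-D localization errors: a tight fair-weather mode plus a
# broad, offset bad-weather mode -- clearly non-Gaussian overall
errors = np.vstack([rng.normal(0.0, 0.3, size=(800, 2)),
                    rng.normal(2.0, 1.5, size=(200, 2))])

ll1 = mean_loglik(errors, *fit_gmm(errors, 1))
ll2 = mean_loglik(errors, *fit_gmm(errors, 2))
print(f"single Gaussian: {ll1:.3f}  two-component mixture: {ll2:.3f}")
```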


A Helping (Human) Hand in Kinematic Structure Estimation

Pfisterer, Adrian, Li, Xing, Mengers, Vito, Brock, Oliver

arXiv.org Artificial Intelligence

Visual uncertainties such as occlusions, lack of texture, and noise present significant challenges in obtaining accurate kinematic models for safe robotic manipulation. We introduce a probabilistic real-time approach that leverages the human hand as a prior to mitigate these uncertainties. By tracking the constrained motion of the human hand during manipulation and explicitly modeling uncertainties in visual observations, our method reliably estimates an object's kinematic model online. We validate our approach on a novel dataset featuring challenging objects that are occluded during manipulation and offer limited articulations for perception. The results demonstrate that by incorporating an appropriate prior and explicitly accounting for uncertainties, our method produces accurate estimates, outperforming two recent baselines by 195% and 140%, respectively. Furthermore, we demonstrate that our approach's estimates are precise enough to allow a robot to manipulate even small objects safely.


MAD-BA: 3D LiDAR Bundle Adjustment -- from Uncertainty Modelling to Structure Optimization

Ćwian, Krzysztof, Di Giammarino, Luca, Ferrari, Simone, Ciarfuglia, Thomas, Grisetti, Giorgio, Skrzypczyński, Piotr

arXiv.org Artificial Intelligence

The joint optimization of sensor poses and 3D structure is fundamental for state estimation in robotics and related fields. Current LiDAR systems often prioritize pose optimization, with structure refinement either omitted or treated separately using representations such as signed distance functions or neural networks. This paper introduces a framework for the simultaneous optimization of sensor poses and the 3D map, represented as surfels. A generalized LiDAR uncertainty model is proposed to address degraded or less reliable measurements in varying scenarios. Experimental results on public datasets demonstrate improved performance over most comparable state-of-the-art methods. The system is provided as open-source software to support further research.
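As a generic illustration of how an uncertainty model can down-weight degraded returns in a surfel-based objective (this is not the model proposed in MAD-BA), a point-to-plane residual can be scaled by a per-point measurement variance, so that unreliable points contribute less to the joint pose/structure cost:

```python
import numpy as np

def weighted_point_to_plane_cost(points, center, normal, variances):
    """Sum of squared surfel-plane distances, each scaled by 1/variance so
    that unreliable returns contribute less to the objective."""
    residuals = (points - center) @ normal      # signed point-to-plane distances
    return float(np.sum(residuals ** 2 / variances))

center = np.array([0.0, 0.0, 0.0])              # surfel position
normal = np.array([0.0, 0.0, 1.0])              # unit normal of the surfel plane

points = np.array([[0.1, 0.2, 0.01],            # reliable return, near the plane
                   [0.3, -0.1, 0.02],           # reliable return
                   [0.5, 0.4, 0.50]])           # degraded return, far off-plane
variances = np.array([0.01, 0.01, 1.0])         # degraded point: large variance

cost = weighted_point_to_plane_cost(points, center, normal, variances)
print(cost)
```

With these (made-up) numbers the off-plane return, despite its large residual, contributes no more to the cost than the reliable ones, which is the qualitative behaviour a measurement-uncertainty model is meant to achieve.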


Towards certification: A complete statistical validation pipeline for supervised learning in industry

Lacasa, Lucas, Pardo, Abel, Arbelo, Pablo, Sánchez, Miguel, Yeste, Pablo, Bascones, Noelia, Martínez-Cava, Alejandro, Rubio, Gonzalo, Gómez, Ignacio, Valero, Eusebio, de Vicente, Javier

arXiv.org Artificial Intelligence

The field of Machine Learning (ML) [1, 2] and its broad spectrum of applications has revolutionized a plethora of technological industries in recent years, ranging from the energy sector and material sciences to telecommunications, finance and consumer goods, to name a few [3]. In the context of aeronautical engineering and aerospace technologies, the field has embraced ML tools only in recent years, but their impact is growing at a rapid pace, ranging from general-purpose ML-based fluid mechanics [4-6], aeroacoustics [7], wind turbines [8] and aerostructures [9] (including prediction of landing gear loads [10]) to flight trajectory optimization [11] and enhanced predictive maintenance [12, 13]: see the recent and illuminating reviews [14, 15] and references therein. Interestingly, the integration of ML-related tools and ideas in the aeronautical and aerospace industries is still in its infancy. Part of the reason is that any new technology has a necessary adoption curve [16, 17], and the fact that ML solutions require expert knowledge at the crossroads of computer science and statistics, as well as a sophisticated operationalization infrastructure (MLOps) [18], does not facilitate this adoption. However, a deeper reason is probably impeding faster adoption: while ML technologies promise high performance and reductions in development and operating costs [19] (e.g. by reducing costs related to expensive and lengthy wind tunnel experiments and numerical simulations), ensuring adequate safety remains paramount in the aeronautical industry, and ML-based tools are often seen as sophisticated black boxes with a low degree of trustworthiness, whose safety is therefore difficult to validate. Air safety authorities accordingly demand rigorous validation and verification processes for these models, and industry leaders have started to propose guidelines and a roadmap on concepts of design assurance for neural-network-related technologies [20-22].
However, industry has only very recently started to embrace the complexities of certifying ML models [23-27], prompting the initiation of discussions around the development of guidelines and a roadmap for design assurance, especially concerning neural-network-related technologies. This pressing need underscores the imperative for collaborative efforts within the industry to establish robust validation frameworks that not only meet regulatory standards but also address the evolving challenges posed by ML integration. This has indeed been well understood and undertaken by Airbus, which has established an internal working group on the verification and validation of surrogate models within the loads and stress domains.


Generalisation of Total Uncertainty in AI: A Theoretical Study

Shariatmadar, Keivan

arXiv.org Machine Learning

AI systems must deal with uncertainty to deliver highly accurate results, and the problem becomes even harder with small data sets or with variation across data sets. This has far-reaching effects on decision-making, forecasting and learning mechanisms. This study seeks to unpack the nature of uncertainty within AI by drawing on established works, the latest developments and practical applications, and provides a novel definition of total uncertainty in AI. From inception theories to current methodologies, the paper offers an integrated view of total uncertainty and of the complexities of uncertainty in AI, helping us understand its meaning and value across different domains.